Last Update: 7/13/2025
LLMProvider Release Notes
🚀 API Updates
Version: 2025-07-13
- New: Added cache functionality to improve response performance and reduce cost (a usage sketch follows this list).
- New: Added batch calling support for processing multiple requests efficiently.
- New: Added grok-related models.
- New: Added usage documentation for Claude Code.
- New: Improved system stability.
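A minimal sketch of what a cached chat request could look like. The endpoint URL, the `cache` field, and the model name are assumptions for illustration only; check the cache documentation for the exact parameter names your account supports.

```python
import requests

API_URL = "https://api.llmprovider.example/v1/chat/completions"  # placeholder URL
API_KEY = "YOUR_API_KEY"

# Hypothetical request: the "cache" field and the model identifier are
# illustrative and may differ from the provider's real schema.
payload = {
    "model": "grok-example",  # substitute a current model identifier
    "messages": [{"role": "user", "content": "Summarize the latest release notes."}],
    "cache": {"enabled": True},  # assumed toggle for request/response caching
}

response = requests.post(
    API_URL,
    json=payload,
    headers={"Authorization": f"Bearer {API_KEY}"},
    timeout=30,
)
response.raise_for_status()
print(response.json())
```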
🚀 API Updates
Version: 2025-01-15
- New: Cache functionality for improved response performance and cost optimization
- New: Batch calling support for processing multiple requests efficiently
- New: Batch model endpoints for handling bulk operations
- Added: Cache management APIs for request/response caching
- Added: Batch processing endpoints with configurable batch sizes (see the sketch after this list)
- Improved: Performance optimization through intelligent caching mechanisms
- Enhanced: API throughput with parallel batch processing capabilities
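A sketch of how batch submission might look, assuming a dedicated batch endpoint and a configurable batch size. The endpoint path, field names (`batch_size`, `requests`, `id`), and polling flow are assumptions, not the provider's confirmed schema.

```python
import requests

API_URL = "https://api.llmprovider.example/v1/batches"  # placeholder batch endpoint
API_KEY = "YOUR_API_KEY"
HEADERS = {"Authorization": f"Bearer {API_KEY}"}

# Hypothetical payload: one batch job containing several chat requests.
# "batch_size" stands in for the configurable batch sizes mentioned above.
batch_job = {
    "model": "grok-example",
    "batch_size": 10,
    "requests": [
        {"messages": [{"role": "user", "content": f"Translate item {i} to French."}]}
        for i in range(3)
    ],
}

submit = requests.post(API_URL, json=batch_job, headers=HEADERS, timeout=30)
submit.raise_for_status()
job_id = submit.json().get("id")

# Poll for completion; a real integration would add backoff and a retry limit.
status = requests.get(f"{API_URL}/{job_id}", headers=HEADERS, timeout=30)
status.raise_for_status()
print(status.json())
```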
📚 Documentation Updates
Version: 2025-02-22
- Fixed: Removed unnecessary `conversation_id` parameter from Chat API examples across all providers
- Fixed: Corrected parameter naming inconsistencies (`tools_choice` → `tool_choice`)
- Fixed: Updated Gemini API documentation model description
- Fixed: Completed truncated Python code examples
- Fixed: Spelling errors in Error codes documentation
- Added: Documentation Style Guide for consistency
- Added: API testing script template for validation (a minimal example follows this list)
- Improved: README.md with comprehensive project information
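A minimal sketch of what such a validation script could look like. The endpoint, environment variable names, model identifier, and expected response fields are assumptions; adapt them to the actual testing script template shipped with the documentation.

```python
import os
import sys

import requests

API_URL = os.environ.get(
    "LLMPROVIDER_API_URL", "https://api.llmprovider.example/v1/chat/completions"
)
API_KEY = os.environ.get("LLMPROVIDER_API_KEY", "")


def run_smoke_test() -> bool:
    """Send one chat completion request and check the response shape."""
    payload = {
        "model": "grok-example",  # placeholder model identifier
        "messages": [{"role": "user", "content": "ping"}],
    }
    try:
        resp = requests.post(
            API_URL,
            json=payload,
            headers={"Authorization": f"Bearer {API_KEY}"},
            timeout=30,
        )
        resp.raise_for_status()
    except requests.RequestException as exc:
        print(f"Request failed: {exc}")
        return False
    # Assumed response shape: a non-empty "choices" list, as in common chat APIs.
    return bool(resp.json().get("choices"))


if __name__ == "__main__":
    sys.exit(0 if run_smoke_test() else 1)
```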
🔄 Migration Guide
From Previous Versions
If you're updating your integration based on previous documentation:
- Remove the `conversation_id` parameter from standard chat completion requests (see the migration sketch after this list)
- Update parameter names: change `tools_choice` to `tool_choice`
- Update model names: Ensure you're using current model identifiers
- Add error handling: Include proper error handling in your code
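A hedged before/after sketch of the request changes described above, with basic error handling added. The endpoint, model name, and surrounding fields are placeholders; only the `conversation_id` removal and the `tools_choice` → `tool_choice` rename come from this guide.

```python
import requests

API_URL = "https://api.llmprovider.example/v1/chat/completions"  # placeholder URL
API_KEY = "YOUR_API_KEY"

# Before (old documentation style): extra conversation_id and misspelled tools_choice.
old_payload = {
    "model": "grok-example",
    "conversation_id": "abc-123",  # remove: not needed for standard chat completions
    "messages": [{"role": "user", "content": "What is the weather in Paris?"}],
    "tools_choice": "auto",        # rename to tool_choice
}

# After: conversation_id dropped, tool_choice spelled correctly.
new_payload = {k: v for k, v in old_payload.items() if k != "conversation_id"}
new_payload["tool_choice"] = new_payload.pop("tools_choice")

try:
    resp = requests.post(
        API_URL,
        json=new_payload,
        headers={"Authorization": f"Bearer {API_KEY}"},
        timeout=30,
    )
    resp.raise_for_status()  # raise on 4xx/5xx responses
except requests.RequestException as exc:
    # Proper error handling, as recommended above: log the failure and retry or fall back.
    print(f"Chat completion request failed: {exc}")
else:
    print(resp.json())
```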
Breaking Changes
- None in this release. All changes are documentation improvements and don't affect API functionality.
For technical support, contact us at [email protected]